

The Download: making tough decisions with AI, and the significance of toys

MIT Technology Review

This week, I've been working on a piece about an AI-based tool that could help guide end-of-life care. We're talking about the kinds of life-and-death decisions that come up for very unwell people. Often, the patient isn't able to make these decisions--instead, the task falls to a surrogate. It can be an extremely difficult and distressing experience. A group of ethicists have an idea for an AI tool that they believe could help make things easier.


Why AI shouldn't be making life-and-death decisions

MIT Technology Review

Let me introduce you to Philip Nitschke, also known as "Dr. Death" or "the Elon Musk of assisted suicide." Nitschke has a curious goal: He wants to "demedicalize" death and make assisted suicide as unassisted as possible through technology. As my colleague Will Heaven reports, Nitschke has developed a coffin-size machine called the Sarco. People seeking to end their lives can enter the machine after undergoing an algorithm-based psychiatric self-assessment.


Activists warn UN about dangers of using AI to make life-and-death decision on the battlefield

Daily Mail - Science & tech

A Nobel Peace Prize winner has warned against robots making life-and-death decisions on the battlefield, calling the practice "unethical and immoral" and warning that its consequences can never be undone. Jody Williams made the statement at the United Nations in New York City after the US military announced a project that uses AI to decide what human soldiers should target and destroy. Williams also pointed out the difficulty of holding those involved accountable for war crimes, since a programmer, a manufacturer, a commander, and the machine itself would all be involved in the act. Williams was accompanied by fellow activists Liz O'Sullivan and Mary Wareham. She won the prestigious accolade in 1997 after leading efforts to ban landmines and is now an advocate with the Campaign To Stop Killer Robots.


Can AI Be Trusted With Life-And-Death Decisions?

#artificialintelligence

It was more than 20 years ago that a computer powered by artificial intelligence first beat humans at their own game. In a chess match against world champion Garry Kasparov that reverberated around the world, IBM's Deep Blue defeated its human opponent, making one thing crystal clear: computer capabilities had surpassed those of humans in certain challenges. As we enter 2018, the opportunities to use AI to augment human capabilities have never been greater. But while dominating a game of chess is one thing, the bigger question remains: Has AI come far enough to be entrusted with far more important, life-and-death decisions? The autonomous car industry is a prime example of this dilemma.


Asimov's 4th Law of Robotics

@machinelearnbot

Like me, I'm sure that many of you nerds have read the book "I, Robot." "I, Robot" is the seminal work by Isaac Asimov (actually it was a series of books, but I only read the one) that explores the moral and ethical challenges posed by a world dominated by robots. But I read that book some 50 years ago, so the movie "I, Robot" with Will Smith is actually more relevant to me today. The movie does a nice job of discussing the ethical and moral challenges of a society where robots play a dominant and crucial role in everyday life. Both the book and the movie revolve around the "Three Laws of Robotics": a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. It's like the "3 Commandments" of being a robot: adhere to these three laws and everything will be just fine. Unfortunately, that turned out not to be true (if 10 commandments cannot effectively govern humans, how can we expect just 3 to govern robots?).